Search Results

Documents authored by Dinur, Irit


Document
Invited Talk
Expanders in Higher Dimensions (Invited Talk)

Authors: Irit Dinur

Published in: LIPIcs, Volume 250, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)


Abstract
Expander graphs have been studied in many areas of mathematics and in computer science with versatile applications, including coding theory, networking, computational complexity and geometry. High-dimensional expanders are a generalization that has been studied in recent years, and their promise is beginning to bear fruit. In the talk, I will survey some powerful local-to-global properties of high-dimensional expanders, and describe several interesting applications, ranging from convergence of random walks to the construction of locally testable codes that prove the c³ conjecture (namely, codes with constant rate, constant distance, and constant locality).

Cite as

Irit Dinur. Expanders in Higher Dimensions (Invited Talk). In 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 250, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{dinur:LIPIcs.FSTTCS.2022.4,
  author =	{Dinur, Irit},
  title =	{{Expanders in Higher Dimensions}},
  booktitle =	{42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2022)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-261-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{250},
  editor =	{Dawar, Anuj and Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2022.4},
  URN =		{urn:nbn:de:0030-drops-173967},
  doi =		{10.4230/LIPIcs.FSTTCS.2022.4},
  annote =	{Keywords: Expanders}
}
Document
Explicit SoS Lower Bounds from High-Dimensional Expanders

Authors: Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We construct an explicit and structured family of 3XOR instances that is hard for O(√(log n)) levels of the Sum-of-Squares hierarchy. In contrast to earlier constructions, which involve a random component, our systems are highly structured and can be constructed explicitly in deterministic polynomial time. Our construction is based on the high-dimensional expanders devised by Lubotzky, Samuels and Vishne, known as LSV complexes or Ramanujan complexes, and our analysis is based on two notions of expansion for these complexes: cosystolic expansion, and a local isoperimetric inequality due to Gromov. Our construction offers an interesting contrast to the recent work of Alev, Jeronimo and the last author (FOCS 2019). They showed that 3XOR instances in which the variables correspond to vertices in a high-dimensional expander are easy to solve. In contrast, in our instances the variables correspond to the edges of the complex.
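As an illustration of the shape of such instances (a minimal sketch only: the complex here is the set of triangles of K_6 rather than an LSV complex, and the right-hand sides are chosen arbitrarily), one equation per triangle constrains the XOR of its three edge variables:

```python
from itertools import combinations
import random

rng = random.Random(0)

# Toy 2-dimensional complex: all triangles of the complete graph K_6.
# (The paper uses LSV/Ramanujan complexes; K_6 only illustrates the shape of
# the instance: one F_2 variable per edge, one equation per triangle.)
vertices = range(6)
triangles = list(combinations(vertices, 3))
edges = sorted({tuple(sorted(e)) for t in triangles for e in combinations(t, 2)})
edge_index = {e: i for i, e in enumerate(edges)}

# One 3XOR equation per triangle: x_e1 xor x_e2 xor x_e3 = b_t,
# with right-hand sides b_t chosen arbitrarily in this sketch.
equations = []
for t in triangles:
    var_ids = [edge_index[tuple(sorted(e))] for e in combinations(t, 2)]
    equations.append((var_ids, rng.randrange(2)))

def satisfied_fraction(assignment, equations):
    """Fraction of 3XOR equations satisfied by a 0/1 assignment to the edge variables."""
    ok = sum((assignment[i] ^ assignment[j] ^ assignment[k]) == b
             for (i, j, k), b in equations)
    return ok / len(equations)

assignment = [rng.randrange(2) for _ in edges]
print(round(satisfied_fraction(assignment, equations), 3))
```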

Cite as

Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani. Explicit SoS Lower Bounds from High-Dimensional Expanders. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2021.38,
  author =	{Dinur, Irit and Filmus, Yuval and Harsha, Prahladh and Tulsiani, Madhur},
  title =	{{Explicit SoS Lower Bounds from High-Dimensional Expanders}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.38},
  URN =		{urn:nbn:de:0030-drops-135774},
  doi =		{10.4230/LIPIcs.ITCS.2021.38},
  annote =	{Keywords: High-dimensional expanders, sum-of-squares, integrality gaps}
}
Document
RANDOM
Direct Sum Testing: The General Case

Authors: Irit Dinur and Konstantin Golubev

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
A function f : [n_1] × ... × [n_d] → F_2 is a direct sum if it is of the form f(a_1,...,a_d) = f_1(a_1) ⊕ ... ⊕ f_d(a_d) for some functions f_i : [n_i] → F_2, i = 1,...,d, where n_1,...,n_d ∈ N. We present a 4-query test which distinguishes between direct sums and functions that are far from them. The test relies on the BLR linearity test (Blum, Luby, Rubinfeld, 1993) and on the direct product test constructed by Dinur & Steurer (2014). We also present a different test, which queries the function (d+1) times but is easier to analyze. In multiplicative ±1 notation, this reads as follows: a d-dimensional tensor with ±1 entries is called a tensor product if it is a tensor product of d vectors with ±1 entries, or equivalently, if it is of rank 1. The presented tests can be read as tests for distinguishing between tensor products and tensors that are far from being tensor products.
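For intuition, if f is a direct sum then for any points x, y and any coordinate subset S the four values f(x), f(y), f(x_S y), f(y_S x) XOR to zero, where x_S y agrees with x on S and with y elsewhere, since each coordinate's contribution appears exactly twice. The sketch below checks this 4-query consistency condition; it illustrates the flavor of the test rather than reproducing the exact query distribution analyzed in the paper.

```python
import random

rng = random.Random(1)
ns = [5, 7, 3]                      # n_1, ..., n_d (arbitrary toy sizes)
d = len(ns)

# A genuine direct sum: f(a_1,...,a_d) = f_1(a_1) xor ... xor f_d(a_d).
fs = [[rng.randrange(2) for _ in range(n)] for n in ns]
def f(a):
    out = 0
    for i, ai in enumerate(a):
        out ^= fs[i][ai]
    return out

def merge(x, y, S):
    """The point that agrees with x on the coordinates in S and with y elsewhere."""
    return tuple(x[i] if i in S else y[i] for i in range(d))

def four_query_check(f, trials=1000):
    """Accept iff f(x) ^ f(y) ^ f(x_S y) ^ f(y_S x) == 0 for random x, y, S."""
    for _ in range(trials):
        x = tuple(rng.randrange(n) for n in ns)
        y = tuple(rng.randrange(n) for n in ns)
        S = {i for i in range(d) if rng.randrange(2)}
        if f(x) ^ f(y) ^ f(merge(x, y, S)) ^ f(merge(y, x, S)) != 0:
            return False
    return True

print(four_query_check(f))   # True: a direct sum always passes
```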

Cite as

Irit Dinur and Konstantin Golubev. Direct Sum Testing: The General Case. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 40:1-40:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.APPROX-RANDOM.2019.40,
  author =	{Dinur, Irit and Golubev, Konstantin},
  title =	{{Direct Sum Testing: The General Case}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{40:1--40:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.40},
  URN =		{urn:nbn:de:0030-drops-112554},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.40},
  annote =	{Keywords: property testing, direct sum, tensor product}
}
Document
From Local to Robust Testing via Agreement Testing

Authors: Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
A local tester for an error-correcting code is a probabilistic procedure that queries a small subset of coordinates, accepts codewords with probability one, and rejects non-codewords with probability proportional to their distance from the code. The local tester is robust if for non-codewords it satisfies the stronger property that the average distance of local views from accepting views is proportional to the distance from the code. Robust testing is an important component in constructions of locally testable codes and probabilistically checkable proofs as it allows for composition of local tests. In this work we show that for certain codes, any (natural) local tester can be converted to a robust tester with roughly the same number of queries. Our result holds for the class of affine-invariant lifted codes, which is a broad class of codes that includes Reed-Muller codes, as well as recent constructions of high-rate locally testable codes (Guo, Kopparty, and Sudan, ITCS 2013). Instantiating this with known local testing results for lifted codes gives a more direct proof that improves some of the parameters of the main result of Guo, Haramaty, and Sudan (FOCS 2015), showing robustness of lifted codes. To obtain the above transformation we relate the notions of local testing and robust testing to the notion of agreement testing, which attempts to find out whether valid partial assignments can be stitched together into a global codeword. We first show that agreement testing implies robust testing, and then show that local testing implies agreement testing. Our proof is combinatorial, and is based on expansion/sampling properties of the collection of local views of local testers. Thus, it immediately applies to local testers of lifted codes that query random affine subspaces in F_q^m, and moreover seems amenable to extension to other families of locally testable codes with expanding families of local views.
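As a toy illustration of agreement testing in this sense (simplified so that local views are random k-subsets of coordinates rather than affine subspaces), the test below samples two overlapping views and checks that their partial assignments agree on the intersection; a table built from a single global word always passes:

```python
import random

rng = random.Random(2)
n, k = 20, 6                      # ground-set size and local-view size (toy choices)

# A "table" of local assignments: for each k-subset S it returns a partial
# assignment on S.  Here the table is derived from one global word, so any
# agreement test should accept; a cheating table need not come from one word.
global_word = [rng.randrange(2) for _ in range(n)]
def local_view(S):
    return {i: global_word[i] for i in S}

def agreement_test(local_view, trials=500):
    """Sample two overlapping k-subsets and check that their local assignments
    agree on the intersection (accept iff they always do, in this toy version)."""
    for _ in range(trials):
        S = set(rng.sample(range(n), k))
        T = set(rng.sample(range(n), k))
        fS, fT = local_view(S), local_view(T)
        if any(fS[i] != fT[i] for i in S & T):
            return False
    return True

print(agreement_test(local_view))   # True for a table coming from one global word
```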

Cite as

Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi. From Local to Robust Testing via Agreement Testing. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2019.29,
  author =	{Dinur, Irit and Harsha, Prahladh and Kaufman, Tali and Ron-Zewi, Noga},
  title =	{{From Local to Robust Testing via Agreement Testing}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.29},
  URN =		{urn:nbn:de:0030-drops-101221},
  doi =		{10.4230/LIPIcs.ITCS.2019.29},
  annote =	{Keywords: Local testing, Robust testing, Agreement testing, Affine-invariant codes, Lifted codes}
}
Document
Every Set in P Is Strongly Testable Under a Suitable Encoding

Authors: Irit Dinur, Oded Goldreich, and Tom Gur

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We show that every set in P is strongly testable under a suitable encoding. By "strongly testable" we mean having a (proximity-oblivious) tester that makes a constant number of queries and rejects with probability that is proportional to the distance of the tested object from the property. By a "suitable encoding" we mean one that is polynomial-time computable and invertible. This result stands in contrast to the known fact that some sets in P are extremely hard to test, providing another demonstration of the crucial role of representation in the context of property testing. The testing result is proved by showing that any set in P has a strong canonical PCP, where canonical means that (for yes-instances) there exists a single proof that is accepted with probability 1 by the system, whereas all other potential proofs are rejected with probability proportional to their distance from this proof. In fact, we show that UP equals the class of sets having strong canonical PCPs (of logarithmic randomness), whereas the class of sets having strong canonical PCPs with polynomial proof length equals "unambiguous-MA". Actually, for the testing result, we use a PCP-of-Proximity version of the foregoing notion and an analogous positive result (i.e., strong canonical PCPPs of logarithmic randomness for any set in UP).
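For reference, the classical two-query tester for the toy property "all symbols of the string are equal" is a standard example of a proximity-oblivious tester in the sense used above: it rejects with probability at least the relative distance from the property. It is included only to illustrate the definition, not the paper's construction.

```python
import random
from collections import Counter

rng = random.Random(3)

def po_tester(w):
    """Proximity-oblivious test for the property 'all symbols equal':
    query two random positions and accept iff they match."""
    i, j = rng.randrange(len(w)), rng.randrange(len(w))
    return w[i] == w[j]

def distance_from_constant(w):
    """Relative distance of w from the nearest constant string."""
    return 1 - Counter(w).most_common(1)[0][1] / len(w)

def empirical_rejection(w, trials=20000):
    return sum(not po_tester(w) for _ in range(trials)) / trials

w = "aaaaabaaacaaaaabbbaa"
print(distance_from_constant(w), round(empirical_rejection(w), 3))
# Distance is 0.25; the empirical rejection probability is about 0.4 >= 0.25.
```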

Cite as

Irit Dinur, Oded Goldreich, and Tom Gur. Every Set in P Is Strongly Testable Under a Suitable Encoding. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2019.30,
  author =	{Dinur, Irit and Goldreich, Oded and Gur, Tom},
  title =	{{Every Set in P Is Strongly Testable Under a Suitable Encoding}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.30},
  URN =		{urn:nbn:de:0030-drops-101234},
  doi =		{10.4230/LIPIcs.ITCS.2019.30},
  annote =	{Keywords: Probabilistically checkable proofs, property testing}
}
Document
Boolean Function Analysis on High-Dimensional Expanders

Authors: Yotam Dikstein, Irit Dinur, Yuval Filmus, and Prahladh Harsha

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We initiate the study of Boolean function analysis on high-dimensional expanders. We describe an analog of the Fourier expansion and of the Fourier levels on simplicial complexes, and generalize the FKN theorem to high-dimensional expanders. Our results demonstrate that a high-dimensional expanding complex X can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing |X(k)| = O(n) points in comparison to (n choose k+1) points in the (k+1)-slice (which consists of all n-bit strings with exactly k+1 ones).

Cite as

Yotam Dikstein, Irit Dinur, Yuval Filmus, and Prahladh Harsha. Boolean Function Analysis on High-Dimensional Expanders. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 38:1-38:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{dikstein_et_al:LIPIcs.APPROX-RANDOM.2018.38,
  author =	{Dikstein, Yotam and Dinur, Irit and Filmus, Yuval and Harsha, Prahladh},
  title =	{{Boolean Function Analysis on High-Dimensional Expanders}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{38:1--38:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.38},
  URN =		{urn:nbn:de:0030-drops-94421},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.38},
  annote =	{Keywords: high dimensional expanders, Boolean function analysis, sparse model}
}
Document
ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network

Authors: Irit Dinur and Pasin Manurangsi

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We study 2-ary constraint satisfaction problems (2-CSPs), which can be stated as follows: given a constraint graph G = (V,E), an alphabet set Sigma and, for each edge {u, v}, a constraint C_uv, the goal is to find an assignment sigma from V to Sigma that satisfies as many constraints as possible, where a constraint C_uv is said to be satisfied by sigma if C_uv contains (sigma(u),sigma(v)). While the approximability of 2-CSPs is quite well understood when the alphabet size |Sigma| is constant (see e.g. [37]), many problems are still open when |Sigma| becomes super-constant. One open problem that has received significant attention in the literature is whether it is hard to approximate 2-CSPs to within a polynomial factor of both |Sigma| and |V| (i.e. (|Sigma||V|)^Omega(1) factor). As a special case of the so-called Sliding Scale Conjecture, Bellare et al. [5] suggested that the answer to this question might be positive. Alas, despite many efforts by researchers to resolve this conjecture (e.g. [39, 4, 20, 21, 35]), it still remains open to this day. In this work, we separate |V| and |Sigma| and ask a closely related but weaker question: is it hard to approximate 2-CSPs to within a polynomial factor of |V| (while |Sigma| may be super-polynomial in |V|)? Assuming the exponential time hypothesis (ETH), we answer this question positively: unless ETH fails, no polynomial-time algorithm can approximate 2-CSPs to within a factor of |V|^{1-o(1)}. Note that our ratio is not only polynomial but also almost linear. This is almost optimal since a trivial algorithm yields an O(|V|)-approximation for 2-CSPs. Thanks to a known reduction [25, 16] from 2-CSPs to the Directed Steiner Network (DSN) problem, our result implies an inapproximability result for the latter with polynomial ratio in terms of the number of demand pairs. Specifically, assuming ETH, no polynomial-time algorithm can approximate DSN to within a factor of k^{1/4 - o(1)} where k is the number of demand pairs. The ratio is roughly the square root of the approximation ratios achieved by the best known polynomial-time algorithms [15, 26], which yield O(k^{1/2 + epsilon})-approximation for every constant epsilon > 0. Additionally, under Gap-ETH, our reduction for 2-CSPs not only rules out polynomial-time algorithms, but also fixed-parameter tractable (FPT) algorithms parameterized by the number of variables |V|. These are algorithms with running time g(|V|)·|Sigma|^O(1) for some function g. Similar improvements apply for DSN parameterized by the number of demand pairs k.
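The definition in the first sentence translates directly into code. The sketch below, with a made-up instance, computes the fraction of constraints satisfied by an assignment, which is the quantity whose approximability is at stake:

```python
# A 2-CSP instance per the definition above: a constraint graph G = (V, E),
# an alphabet Sigma, and for each edge {u, v} a constraint C_uv, i.e. a set of
# allowed pairs (sigma(u), sigma(v)).  The instance below is made up for illustration.
V = ["u", "v", "w"]
Sigma = [0, 1, 2]
constraints = {
    ("u", "v"): {(0, 1), (1, 2), (2, 0)},     # C_uv
    ("v", "w"): {(a, a) for a in Sigma},      # an equality constraint
    ("u", "w"): {(0, 0), (1, 2)},
}

def value(assignment, constraints):
    """Fraction of constraints C_uv that contain (sigma(u), sigma(v))."""
    sat = sum((assignment[u], assignment[v]) in C
              for (u, v), C in constraints.items())
    return sat / len(constraints)

sigma = {"u": 1, "v": 2, "w": 2}
print(value(sigma, constraints))   # 1.0: this assignment satisfies all three constraints
```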

Cite as

Irit Dinur and Pasin Manurangsi. ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2018.36,
  author =	{Dinur, Irit and Manurangsi, Pasin},
  title =	{{ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{36:1--36:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.36},
  URN =		{urn:nbn:de:0030-drops-83670},
  doi =		{10.4230/LIPIcs.ITCS.2018.36},
  annote =	{Keywords: Hardness of Approximation, Constraint Satisfaction Problems, Directed Steiner Network, Parameterized Complexity}
}
Document
Multiplayer Parallel Repetition for Expanding Games

Authors: Irit Dinur, Prahladh Harsha, Rakesh Venkat, and Henry Yuen

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
We investigate the value of parallel repetition of one-round games with any number of players k >= 2. It has been an open question whether an analogue of Raz's Parallel Repetition Theorem holds for games with more than two players, i.e., whether the value of the repeated game decays exponentially with the number of repetitions. Verbitsky has shown, via a reduction to the density Hales-Jewett theorem, that the value of the repeated game must approach zero as the number of repetitions increases. However, the rate of decay obtained in this way is extremely slow, and it is an open question whether the true rate is exponential as is the case for all two-player games. Exponential decay bounds are known for several special cases of multi-player games, e.g., free games and anchored games. In this work, we identify a certain expansion property of the base game and show that all games with this property satisfy an exponential-decay parallel repetition bound. Free games and anchored games satisfy this expansion property, and thus our parallel repetition theorem reproduces all earlier exponential-decay bounds for multiplayer games. More generally, our parallel repetition bound applies to all multiplayer games that are *connected* in a certain sense. We also describe a very simple game, called the GHZ game, that does not satisfy this connectivity property, and for which we do not know an exponential decay bound. We suspect that progress on bounding the value of the parallel repetition of the GHZ game will lead to further progress on the general question.
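For concreteness, the GHZ game mentioned at the end is the three-player game in which the questions x, y, z are uniform bits subject to x ⊕ y ⊕ z = 0 and the players win iff their answer bits satisfy a ⊕ b ⊕ c = x ∨ y ∨ z (the standard formulation, not restated in the abstract). A brute-force search over deterministic strategies recovers its classical value of 3/4:

```python
from itertools import product

# Questions of the GHZ game: uniform bits with x ^ y ^ z == 0.
questions = [(x, y, z) for x, y, z in product((0, 1), repeat=3) if x ^ y ^ z == 0]

def wins(x, y, z, a, b, c):
    # Standard GHZ winning predicate: a xor b xor c equals x or y or z.
    return (a ^ b ^ c) == (x | y | z)

def game_value():
    """Maximum winning probability over deterministic strategies
    (by convexity this equals the classical value of the game)."""
    best = 0.0
    # A deterministic strategy for one player is a function {0,1} -> {0,1},
    # encoded as (answer on question 0, answer on question 1).
    strategies = list(product((0, 1), repeat=2))
    for A, B, C in product(strategies, repeat=3):
        p = sum(wins(x, y, z, A[x], B[y], C[z]) for x, y, z in questions) / len(questions)
        best = max(best, p)
    return best

print(game_value())   # 0.75
```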

Cite as

Irit Dinur, Prahladh Harsha, Rakesh Venkat, and Henry Yuen. Multiplayer Parallel Repetition for Expanding Games. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.ITCS.2017.37,
  author =	{Dinur, Irit and Harsha, Prahladh and Venkat, Rakesh and Yuen, Henry},
  title =	{{Multiplayer Parallel Repetition for Expanding Games}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.37},
  URN =		{urn:nbn:de:0030-drops-81575},
  doi =		{10.4230/LIPIcs.ITCS.2017.37},
  annote =	{Keywords: Parallel Repetition, Multi-player, Expander}
}
Document
Cube vs. Cube Low Degree Test

Authors: Amey Bhangale, Irit Dinur, and Inbal Livni Navon

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
We revisit the Raz-Safra plane-vs.-plane test and study the closely related cube vs. cube test. In this test the tester has access to a "cubes table", which assigns to every cube a low-degree polynomial. The tester randomly selects two cubes (affine subspaces of dimension 3) that intersect on a point x in F^m, and checks that the assignments to the cubes agree with each other on the point x. Our main result is a new combinatorial proof for a low-degree test that comes closer to the soundness limit, as it works for all epsilon >= poly(d)/|F|^{1/2}, where d is the degree. This should be compared to the previously best soundness value of epsilon >= poly(m, d)/|F|^{1/8}. Our soundness limit improves upon the dependence on the field size and does not depend on the dimension of the ambient space. Our proof is combinatorial and direct: unlike the Raz-Safra proof, it proceeds in one shot and does not require induction on the dimension of the ambient space. The ideas in our proof come from works on direct product testing and are even simpler in the current setting thanks to the low degree. Along the way we also prove a somewhat surprising fact about the connection between different agreement tests: it does not matter if the tester chooses the cubes to intersect on points or on lines; for every given table, its success probability in either test is nearly the same.
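A minimal sketch of the test as described above: a cube is represented by a base point and three directions over a toy prime field, the cubes table is (for the completeness case only) the restriction of a hidden global polynomial, and the tester checks that two random cubes through a point x assign the same value to x. The field size, dimension and polynomial below are arbitrary choices made for illustration.

```python
import random

rng = random.Random(4)
P, M = 101, 4          # toy prime field F_p and ambient dimension m

# A hidden sparse global polynomial g : F_p^M -> F_p, used only to build a
# consistent cubes table; in the soundness setting the table is arbitrary.
monomials = [(tuple(rng.randrange(3) for _ in range(M)), rng.randrange(P)) for _ in range(6)]
def g(pt):
    total = 0
    for exps, c in monomials:
        term = c
        for xi, e in zip(pt, exps):
            term = term * pow(xi, e, P) % P
        total = (total + term) % P
    return total

def random_cube_through(x):
    """A 'cube' (3-dimensional affine subspace) containing x, given by a base
    point and three directions; x sits at local coordinates t inside it."""
    dirs = [tuple(rng.randrange(P) for _ in range(M)) for _ in range(3)]
    t = tuple(rng.randrange(P) for _ in range(3))
    base = tuple((x[i] - sum(t[j] * dirs[j][i] for j in range(3))) % P for i in range(M))
    return base, dirs, t

def table(base, dirs):
    """Cubes table: the restriction of g to the cube, as a function of local coordinates."""
    def q(t1, t2, t3):
        pt = tuple((base[i] + t1*dirs[0][i] + t2*dirs[1][i] + t3*dirs[2][i]) % P
                   for i in range(M))
        return g(pt)
    return q

def cube_vs_cube_test():
    """Pick a point x and two cubes through x; accept iff the assigned values agree at x."""
    x = tuple(rng.randrange(P) for _ in range(M))
    b1, d1, t1 = random_cube_through(x)
    b2, d2, t2 = random_cube_through(x)
    return table(b1, d1)(*t1) == table(b2, d2)(*t2)

print(all(cube_vs_cube_test() for _ in range(100)))   # True for a consistent table
```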

Cite as

Amey Bhangale, Irit Dinur, and Inbal Livni Navon. Cube vs. Cube Low Degree Test. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 40:1-40:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.ITCS.2017.40,
  author =	{Bhangale, Amey and Dinur, Irit and Livni Navon, Inbal},
  title =	{{Cube vs. Cube Low Degree Test}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{40:1--40:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.40},
  URN =		{urn:nbn:de:0030-drops-81748},
  doi =		{10.4230/LIPIcs.ITCS.2017.40},
  annote =	{Keywords: Low Degree Test, Probabilistically Checkable Proofs, Locally Testable Codes}
}
Document
Exponentially Small Soundness for the Direct Product Z-Test

Authors: Irit Dinur and Inbal Livni Navon

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
Given a function f:[N]^k->[M]^k, the Z-test is a three-query test for checking if a function f is a direct product, namely if there are functions g_1,...,g_k:[N]->[M] such that f(x_1,...,x_k)=(g_1(x_1),...,g_k(x_k)) for every input x in [N]^k. This test was introduced by Impagliazzo et al. (SICOMP 2012), who showed that if the test passes with probability epsilon > exp(-sqrt(k)) then f is Omega(epsilon)-close to a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to exp(-k) (which would be optimal). This is our main result: we show that whenever f passes the Z-test with probability epsilon > exp(-k), there must be a global reason for this: namely, f must be close to a product function on some Omega(epsilon) fraction of its domain. Towards proving our result we analyze the related (two-query) V-test, and prove a "restricted global structure" theorem for it. Such theorems were also proven in previous works on direct product testing in the small soundness regime. The most recent work, by Dinur and Steurer (CCC 2014), analyzed the V-test in the exponentially small soundness regime. We strengthen the conclusion of that theorem by moving from an "in expectation" statement to a stronger "concentration of measure" type of statement, which we prove using hypercontractivity. This stronger statement allows us to proceed to analyze the Z-test. We analyze two variants of direct product tests: one for functions on ordered tuples, as above, and another for functions on sets of size k. The work of Impagliazzo et al. was actually focused only on functions of the latter type, i.e. on sets. We prove exponentially small soundness for the Z-test for both variants. Although the two appear very similar, the analysis for tuples is more tricky and requires some additional ideas.
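As background, f is a direct product if f(x_1,...,x_k) = (g_1(x_1),...,g_k(x_k)). A two-query consistency check in the spirit of the V-test is sketched below: resample a few coordinates of x and verify that f's answers did not change on the untouched coordinates. The number of resampled coordinates and the query distribution are illustrative choices, not necessarily those analyzed in the paper.

```python
import random

rng = random.Random(5)
N, M, k = 10, 4, 6                    # toy sizes

# A genuine direct product function f(x_1,...,x_k) = (g_1(x_1),...,g_k(x_k)).
gs = [[rng.randrange(M) for _ in range(N)] for _ in range(k)]
def f(x):
    return tuple(gs[i][x[i]] for i in range(k))

def v_style_test(f, trials=1000, resample=2):
    """Pick x, resample a few random coordinates to get y, and accept iff
    f(x) and f(y) agree on every coordinate that was not resampled."""
    for _ in range(trials):
        x = [rng.randrange(N) for _ in range(k)]
        y = list(x)
        changed = rng.sample(range(k), resample)
        for i in changed:
            y[i] = rng.randrange(N)
        fx, fy = f(tuple(x)), f(tuple(y))
        if any(fx[i] != fy[i] for i in range(k) if i not in changed):
            return False
    return True

print(v_style_test(f))   # True: a direct product always passes
```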

Cite as

Irit Dinur and Inbal Livni Navon. Exponentially Small Soundness for the Direct Product Z-Test. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 29:1-29:50, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.CCC.2017.29,
  author =	{Dinur, Irit and Livni Navon, Inbal},
  title =	{{Exponentially Small Soundness for the Direct Product Z-Test}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{29:1--29:50},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.29},
  URN =		{urn:nbn:de:0030-drops-75336},
  doi =		{10.4230/LIPIcs.CCC.2017.29},
  annote =	{Keywords: Direct Product Testing, Property Testing, Agreement}
}
Document
Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity

Authors: Irit Dinur and Or Meir

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
One of the major challenges of research in circuit complexity is proving super-polynomial lower bounds for de Morgan formulas. Karchmer, Raz, and Wigderson suggested approaching this problem by proving that formula complexity behaves "as expected" with respect to the composition of functions f * g. They showed that this conjecture, if proved, would imply super-polynomial formula lower bounds. The first step toward proving the KRW conjecture was made by Edmonds et al., who proved an analogue of the conjecture for the composition of "universal relations". In this work, we extend the argument of Edmonds et al. further to f * g where f is an arbitrary function and g is the parity function. While this special case of the KRW conjecture was already proved implicitly in Håstad's work on random restrictions, our proof seems more likely to be generalizable to other cases of the conjecture. In particular, our proof uses an entirely different approach, based on the communication complexity technique of Karchmer and Wigderson. In addition, our proof gives a new structural result, which roughly says that the naive way of computing f * g is the only optimal way. Along the way, we obtain a new proof of the state-of-the-art formula lower bound of n^{3-o(1)} due to Håstad.
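The composition f * g here is the block composition standard in the KRW setting: g is applied to disjoint blocks of the input and f to the resulting bits. A small sketch, with g = parity as in the special case treated in the paper and an arbitrarily chosen outer function f:

```python
from functools import reduce
from operator import xor

def compose(f, g, blocks):
    """Block composition (f * g): apply g to each block, then f to the results."""
    return f([g(block) for block in blocks])

# g = parity, as in the special case f * XOR treated in the paper.
def parity(bits):
    return reduce(xor, bits, 0)

# f = majority, an arbitrary outer function chosen only for illustration.
def majority(bits):
    return int(sum(bits) * 2 > len(bits))

blocks = [[1, 0, 1], [1, 1, 1], [0, 0, 1]]    # three parity blocks
print(compose(majority, parity, blocks))       # block parities 0, 1, 1 -> majority 1
```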

Cite as

Irit Dinur and Or Meir. Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 3:1-3:51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.CCC.2016.3,
  author =	{Dinur, Irit and Meir, Or},
  title =	{{Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{3:1--3:51},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.3},
  URN =		{urn:nbn:de:0030-drops-58412},
  doi =		{10.4230/LIPIcs.CCC.2016.3},
  annote =	{Keywords: Formula lower bounds, communication complexity, Karchmer-Wigderson games, KRW composition conjecture}
}
Document
Derandomized Graph Product Results Using the Low Degree Long Code

Authors: Irit Dinur, Prahladh Harsha, Srikanth Srinivasan, and Girish Varma

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
In this paper, we address the question of whether the recent derandomization results obtained by the use of the low-degree long code can be extended to other product settings. We consider two settings: (1) the graph product results of Alon, Dinur, Friedgut and Sudakov [GAFA, 2004] and (2) the "majority is stablest" type of result obtained by Dinur, Mossel and Regev [SICOMP, 2009] and Dinur and Shinkar [In Proc. APPROX, 2010] while studying the hardness of approximate graph coloring. In our first result, we show that there exists a considerably smaller subgraph of K_3^{⊗R} which exhibits the following property (shown for K_3^{⊗R} by Alon et al.): independent sets close in size to the maximum independent set are well approximated by dictators. The "majority is stablest" type of result of Dinur et al. and Dinur and Shinkar shows that if there exist two sets of vertices A and B in K_3^{⊗R} with very few edges having one endpoint in A and the other in B, then it must be the case that the two sets A and B share a single influential coordinate. In our second result, we show that a similar "majority is stablest" statement holds for a considerably smaller subgraph of K_3^{⊗R}. Furthermore, using this result, we give a more efficient reduction from Unique Games to the graph coloring problem, leading to improved hardness of approximation results for coloring.
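For reference, K_3^{⊗R} is the R-fold tensor (weak) power of the triangle: vertices are strings in {0,1,2}^R, and two vertices are adjacent iff they differ in every coordinate. The few lines below make this concrete; the smaller subgraphs constructed in the paper are not reproduced here.

```python
from itertools import product

def adjacent_in_K3_tensor_power(u, v):
    """Adjacency in K_3^{tensor R}: u, v in {0,1,2}^R are adjacent iff they
    differ in every coordinate (coordinatewise adjacency in K_3)."""
    return all(a != b for a, b in zip(u, v))

R = 3
vertices = list(product(range(3), repeat=R))
deg = sum(adjacent_in_K3_tensor_power(vertices[0], v) for v in vertices)
print(len(vertices), deg)   # 27 vertices, each of degree 2^R = 8
```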

Cite as

Irit Dinur, Prahladh Harsha, Srikanth Srinivasan, and Girish Varma. Derandomized Graph Product Results Using the Low Degree Long Code. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 275-287, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.STACS.2015.275,
  author =	{Dinur, Irit and Harsha, Prahladh and Srinivasan, Srikanth and Varma, Girish},
  title =	{{Derandomized Graph Product Results Using the Low Degree Long Code}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{275--287},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.275},
  URN =		{urn:nbn:de:0030-drops-49200},
  doi =		{10.4230/LIPIcs.STACS.2015.275},
  annote =	{Keywords: graph product, derandomization, low degree long code, graph coloring}
}